Why companies need artificial intelligence explainability

By Sara Brown

Creating successful artificial intelligence programs doesn’t end with building the right AI system. These programs also need to be integrated into an organization, and stakeholders — particularly employees and customers — need to trust that the AI program is accurate and reliable.

This is why companies need to build enterprisewide artificial intelligence explainability, according to a new research briefing by Ida Someh, Barbara Wixom, and Cynthia Beath of the MIT Center for Information Systems Research. The researchers define artificial intelligence explainability as “the ability to manage AI initiatives in ways that ensure models are value-generating, compliant, representative, and reliable.”

Read the report

The researchers identified four characteristics of artificial intelligence programs that can make it hard for stakeholders to trust them, along with ways to overcome each one:

1. Unproven value. Because artificial intelligence is still relatively new, there isn’t an extensive list of proven use cases. Leaders are often uncertain if and how their company will see returns from AI programs.

To address this, companies need to create value formulation practices, which help people substantiate how AI can be a good investment in ways that appeal to a variety of stakeholders.

2. Model opacity. Artificial intelligence relies on complex math and statistics, so it can be hard to tell whether a model is producing accurate results and operating in a compliant, ethical way.

To address this, companies should develop decision tracing practices, which help artificial intelligence teams unravel the mathematics and computations behind models and convey how they work to the people who use them. These practices can include using visuals like diagrams and charts.
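
The briefing describes decision tracing as a management practice and does not prescribe tooling. As a hedged illustration only, the sketch below assumes a scikit-learn classifier and uses permutation importance, one common way to show which inputs drive a model's predictions; the dataset and model choice are assumptions for demonstration, not part of the research.

```python
# Illustrative sketch only: one possible way to support "decision tracing"
# with a scikit-learn model. The dataset, model, and permutation-importance
# approach are assumptions for demonstration, not from the CISR briefing.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much held-out accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Rank features so the AI team can turn them into a chart for stakeholders.
ranked = sorted(
    zip(data.feature_names, result.importances_mean),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked[:5]:
    print(f"{name}: importance {score:.3f}")
```

A chart built from scores like these is the kind of visual the researchers suggest using to convey how a model works to the people who rely on it.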


3. Model drift. An AI model will produce biased results if the data used to train it is biased. And models can “drift” over time, meaning they can start producing inaccurate results as the world changes or incorrect data is included in the model.

Bias remediation practices can help AI teams address model drift and bias by exposing how models reach decisions. If a team detects an unusual pattern, for example, stakeholders can review it.
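
The briefing does not specify drift tooling, but a lightweight statistical check is one way teams surface drift for review. The sketch below is an assumption-laden illustration, not the researchers' method: it compares a stored sample of training data with recent production data using a two-sample Kolmogorov-Smirnov test; the feature names, data, and alert threshold are hypothetical.

```python
# Illustrative sketch only: flag features whose distribution has shifted
# between training data and recent production data. Features, data, and
# threshold are hypothetical, not from the briefing.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical data: "income" drifts upward in production, "age" does not.
training = {"income": rng.normal(50, 10, 5000), "age": rng.normal(40, 12, 5000)}
recent = {"income": rng.normal(58, 10, 1000), "age": rng.normal(40, 12, 1000)}

ALERT_P_VALUE = 0.01  # review threshold; a judgment call, not from the source

for feature in training:
    stat, p_value = ks_2samp(training[feature], recent[feature])
    status = "flag for stakeholder review" if p_value < ALERT_P_VALUE else "ok"
    print(f"{feature}: KS={stat:.3f}, p={p_value:.4f} -> {status}")
```

A flagged feature does not prove the model is wrong; it is a trigger for the kind of stakeholder review the researchers describe.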

4. Mindless application. AI model results are not definitive. Treating them as such can be risky, especially if they are being applied to new cases or contexts.

Companies can remedy this by creating boundary setting practices, which provide guidance for applying AI models mindfully and avoiding unexpected outcomes or unintended consequences.
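
Boundary setting is described as a management practice, but teams often back it with technical guardrails. The sketch below is a hedged example rather than the researchers' method: a hypothetical wrapper that refuses to auto-score inputs falling outside the ranges seen in training, routing them to human review instead.

```python
# Illustrative sketch only: one simple technical guardrail for "boundary
# setting." The BoundedModel wrapper is hypothetical, not from the briefing.
import numpy as np


class BoundedModel:
    """Wraps a fitted model and refuses to score out-of-range inputs."""

    def __init__(self, model, X_train):
        self.model = model
        X_train = np.asarray(X_train)
        self.lower = X_train.min(axis=0)
        self.upper = X_train.max(axis=0)

    def predict(self, X):
        X = np.asarray(X)
        in_bounds = ((X >= self.lower) & (X <= self.upper)).all(axis=1)
        if not in_bounds.all():
            # Don't guess on inputs unlike anything the model was trained on;
            # send them to a person instead.
            raise ValueError(
                f"{int((~in_bounds).sum())} row(s) fall outside the training "
                "data range; route them to human review instead of auto-scoring."
            )
        return self.model.predict(X)


# Hypothetical usage with any scikit-learn-style estimator:
#   bounded = BoundedModel(fitted_model, X_train)
#   predictions = bounded.predict(X_new)
```

Range checks are a crude boundary; the broader point from the briefing is to make a model's limits explicit before it is applied to new cases or contexts.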

Artificial intelligence explainability is an emerging field. Teams working on AI projects are mostly “creating the playbook as they go,” the researchers write. Organizations need to proactively develop and share good practices.

The researchers recommended starting by identifying units and organizations that are already creating effective AI explanations; identifying practices that the organization’s own AI project teams have already adopted; and continuing to test the most promising practices while institutionalizing the best ones.

Read next: 5 data monetization tools that help AI initiatives 

For more info, contact Sara Brown, Senior News Editor and Writer.